2.
BMC Med Educ ; 24(1): 448, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658906

ABSTRACT

OBJECTIVES: This study aimed to investigate the utility of the RAND/UCLA appropriateness method (RAM) in validating expert consensus-based multiple-choice questions (MCQs) on electrocardiography (ECG). METHODS: According to the RAM user's manual, nine panelists comprising various experts who routinely handle ECGs were asked to reach a consensus in three phases: a preparatory phase (round 0), an online test phase (round 1), and a face-to-face expert panel meeting (round 2). In round 0, the objectives and future timeline of the study were elucidated to the nine expert panelists with a summary of relevant literature. In round 1, 100 ECG questions prepared by two skilled cardiologists were answered, and the success rate was calculated by dividing the number of correct answers by 9. Furthermore, the questions were stratified into "Appropriate," "Discussion," or "Inappropriate" according to the median score and interquartile range (IQR) of the appropriateness ratings by the nine panelists. In round 2, the validity of the 100 ECG questions was discussed in an expert panel meeting according to the results of round 1 and finally reassessed as "Appropriate," "Candidate," "Revision," or "Defer." RESULTS: In round 1, the average success rate of the nine experts was 0.89. Using the median score and IQR, 54 questions were classified as "Discussion." In the round 2 expert panel meeting, 23% of the original 100 questions were ultimately deemed inappropriate, although they had been prepared by two skilled cardiologists. Most of the 46 questions categorized as "Appropriate" using the median score and IQR in round 1 were still considered "Appropriate" after round 2 (44/46, 95.7%). CONCLUSIONS: The use of the median score and IQR allowed for a more objective determination of question validity. The RAM may help select appropriate questions, contributing to the preparation of higher-quality tests.
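The round 1 stratification lends itself to a short illustration. Below is a minimal Python sketch of median/IQR-based classification on a 9-point RAM appropriateness scale; the function names and the cutoffs are hypothetical, since the abstract does not publish the exact thresholds used.

```python
from statistics import median, quantiles

def classify_question(ratings, iqr_cutoff=2.0):
    """Classify one question from its panelists' appropriateness ratings
    (1-9 RAM scale) using the median and interquartile range.
    Cutoffs are illustrative, not the paper's exact thresholds."""
    med = median(ratings)
    q1, _, q3 = quantiles(ratings, n=4, method="inclusive")
    if q3 - q1 > iqr_cutoff:      # wide spread signals panel disagreement
        return "Discussion"
    if med >= 7:
        return "Appropriate"
    if med <= 3:
        return "Inappropriate"
    return "Discussion"           # mid-range medians also go to discussion

def success_rate(n_correct, n_panelists=9):
    """Round 1 success rate: correct answers divided by panel size."""
    return n_correct / n_panelists
```

For example, ratings of [8, 8, 9, 7, 8, 8, 9, 8, 7] have a median of 8 with zero IQR and classify as "Appropriate", while a split panel such as [1, 9, 2, 8, 9, 1, 5, 9, 2] falls into "Discussion".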


Subject(s)
Electrocardiography , Humans , Consensus , Reproducibility of Results , Clinical Competence/standards , Educational Measurement/methods , Cardiology/standards
4.
Curr Opin Anaesthesiol ; 37(3): 259-265, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38573182

ABSTRACT

PURPOSE OF REVIEW: To discuss considerations surrounding the use of point-of-care ultrasound (POCUS) in pediatric anesthesiology. RECENT FINDINGS: POCUS is an indispensable tool in various medical specialties, including pediatric anesthesiology. Credentialing for POCUS should be considered to ensure that practitioners are able to acquire images, interpret them correctly, and use ultrasound to guide procedures safely and effectively. In the absence of formal guidelines for anesthesiology, current practice and oversight vary by institution. In this review, we explore the significance of POCUS in pediatric anesthesiology, discuss credentialing, and compare the specific requirements and challenges currently associated with using POCUS in pediatric anesthesia. SUMMARY: Point-of-care ultrasound is being utilized by pediatric anesthesiologists and has the potential to improve patient assessment, procedure guidance, and decision-making. Guidelines increase standardization, and quality assurance procedures help maintain high-quality data. Credentialing standards for POCUS in pediatric anesthesiology are essential to ensure that practitioners have the necessary skills and knowledge to use this technology effectively and safely. Currently, there are no national pediatric POCUS guidelines on which to base credentialing processes for pediatric anesthesia practices. Further work directed at establishing pediatric-specific curriculum goals and competency standards is needed to train current and future pediatric anesthesia providers and increase overall acceptance of POCUS use.


Subject(s)
Anesthesiology , Clinical Competence , Credentialing , Pediatrics , Point-of-Care Systems , Ultrasonography , Humans , Anesthesiology/education , Anesthesiology/standards , Credentialing/standards , Point-of-Care Systems/standards , Child , Pediatrics/education , Pediatrics/standards , Pediatrics/methods , Ultrasonography/standards , Ultrasonography/methods , Clinical Competence/standards , Ultrasonography, Interventional/standards , Ultrasonography, Interventional/methods
5.
JAMA ; 331(9): 727-728, 2024 03 05.
Article in English | MEDLINE | ID: mdl-38315157

ABSTRACT

This Viewpoint discusses the ABIM's continuing efforts to innovate and streamline maintenance of certification, including the recently launched Longitudinal Knowledge Assessment (LKA), to better accommodate physicians' schedules and desires for flexibility.


Subject(s)
Certification , Clinical Competence , Physicians , Humans , Certification/methods , Certification/standards , Certification/trends , Clinical Competence/standards , Education, Medical, Continuing/standards , Physicians/standards , United States
6.
J Clin Nurs ; 33(6): 2069-2083, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38413769

ABSTRACT

BACKGROUND: Evidence-based healthcare (EBHC) enables consistent and effective healthcare that prioritises patient safety. The competencies of advanced practice nurses (APNs) are essential for implementing EBHC because their professional duties include promoting EBHC. AIM: To identify, critically appraise, and synthesise the best available evidence concerning the EBHC competence of APNs and associated factors. DESIGN: A systematic review. DATA SOURCES: CINAHL, PubMed, Scopus, Medic, ProQuest, and MedNar. METHODS: Databases were searched (through 19 September 2023) for studies that examined the EBHC competence of APNs and associated factors. Quantitative studies published in English, Swedish, and Finnish were included. We followed the JBI methodology for systematic reviews and performed a narrative synthesis. RESULTS: The review included 12 quantitative studies, using 15 different instruments, and involved 3163 participants. The quality of the studies was fair. The APNs' EBHC competence areas were categorised into five segments according to the JBI EBHC model. The strongest areas of competence were global health as a goal and transferring and implementing evidence, while the weakest were generating and synthesising evidence. Evidence on factors influencing APNs' EBHC competencies was contradictory, but higher levels of education and the presence of an organisational research council may be positively associated with APNs' EBHC competencies. CONCLUSION: The development of EBHC competencies for APNs should prioritise evidence generation and synthesis. Elevating the education level of APNs and establishing a research council within the organisation can potentially enhance the EBHC competence of APNs. IMPLICATIONS FOR THE PROFESSION: We should consider weaknesses in EBHC competence when developing education and practical exercises for APNs. This approach will promote the development of APNs' EBHC competence and EBHC implementation in nursing practice.
REGISTRATION AND REPORTING CHECKLIST: The review was registered in PROSPERO (CRD42021226578), and reporting followed the PRISMA checklist. PATIENT/PUBLIC CONTRIBUTION: None.


Subject(s)
Advanced Practice Nursing , Clinical Competence , Humans , Clinical Competence/standards , Evidence-Based Nursing , Evidence-Based Practice , Adult
7.
Acad Med ; 99(5): 524-533, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38207056

ABSTRACT

PURPOSE: Given the increasing significance and potential impact of artificial intelligence (AI) technology on health care delivery, there is an increasing demand to integrate AI into medical school curricula. This study aimed to define medical AI competencies and identify the essential competencies for medical graduates in South Korea. METHOD: An initial Delphi survey conducted in 2022 involving 4 groups of medical AI experts (n = 28) yielded 42 competency items. Subsequently, an online questionnaire survey was carried out with 1,955 participants (1,174 students and 781 professors) from medical schools across South Korea, utilizing the list of 42 competencies developed from the first Delphi round. A subsequent Delphi survey was conducted with 33 medical educators from 21 medical schools to differentiate the essential AI competencies from the optional ones. RESULTS: The study identified 6 domains encompassing 36 AI competencies essential for medical graduates: (1) understanding digital health and changes driven by AI; (2) fundamental knowledge and skills in medical AI; (3) ethics and legal aspects in the use of medical AI; (4) medical AI application in clinical practice; (5) processing, analyzing, and evaluating medical data; and (6) research and development of medical AI, as well as subcompetencies within each domain. While numerous competencies within the first 4 domains were deemed essential, a higher percentage of experts indicated that competencies in the last 2 domains, data science and medical AI research and development, were optional. CONCLUSIONS: This medical AI framework of 6 competencies and their subcompetencies for medical graduates exhibits promising potential for guiding the integration of AI into medical curricula. Further studies conducted in diverse contexts and countries are necessary to validate and confirm the applicability of these findings.
Additional research is imperative for developing specific and feasible educational models to integrate these proposed competencies into pre-existing curricula.


Subject(s)
Artificial Intelligence , Curriculum , Delphi Technique , Schools, Medical , Students, Medical , Republic of Korea , Humans , Surveys and Questionnaires , Curriculum/standards , Schools, Medical/standards , Students, Medical/statistics & numerical data , Male , Female , Clinical Competence/standards , Adult , Faculty, Medical
8.
Acad Med ; 99(5): 534-540, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38232079

ABSTRACT

PURPOSE: Learner development and promotion rely heavily on narrative assessment comments, but narrative assessment quality is rarely evaluated in medical education. Educators have developed tools such as the Quality of Assessment for Learning (QuAL) tool to evaluate the quality of narrative assessment comments; however, scoring the comments generated in medical education assessment programs is time intensive. The authors developed a natural language processing (NLP) model for applying the QuAL score to narrative supervisor comments. METHOD: Samples of 2,500 Entrustable Professional Activities assessments were randomly extracted and deidentified from the McMaster (1,250 comments) and Saskatchewan (1,250 comments) emergency medicine (EM) residency training programs during the 2019-2020 academic year. Comments were rated using the QuAL score by 25 EM faculty members and 25 EM residents. The results were used to develop and test an NLP model to predict the overall QuAL score and QuAL subscores. RESULTS: All 50 raters completed the rating exercise. Approximately 50% of the comments had perfect agreement on the QuAL score, with the remaining resolved by the study authors. Creating a meaningful suggestion for improvement was the key differentiator between high- and moderate-quality feedback. The overall QuAL model predicted the exact human-rated score or 1 point above or below it in 87% of instances. Overall model performance was excellent, especially regarding the subtasks on suggestions for improvement and the link between resident performance and improvement suggestions, which achieved 85% and 82% balanced accuracies, respectively. CONCLUSIONS: This model could save considerable time for programs that want to rate the quality of supervisor comments, with the potential to automatically score a large volume of comments. 
This model could be used to provide faculty with real-time feedback or as a tool to quantify and track the quality of assessment comments at faculty, rotation, program, or institution levels.


Subject(s)
Competency-Based Education , Internship and Residency , Natural Language Processing , Humans , Competency-Based Education/methods , Internship and Residency/standards , Clinical Competence/standards , Narration , Educational Measurement/methods , Educational Measurement/standards , Emergency Medicine/education , Faculty, Medical/standards
9.
Br J Clin Psychol ; 63(2): 213-226, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38235902

ABSTRACT

OBJECTIVE: Psychological formulation is a key competency for clinical psychologists. However, there is a lack of consensus regarding the key components and processes of formulation that are hypothesized to contribute to poor reliability of formulations. The aim of this study was to develop consensus on the essential components of a formulation to inform training for clinical psychologists and best practice guidelines. METHODS: A Delphi methodology was used. Items were generated from the literature and discussed and refined with a panel of experts (n = 10). In round one, 110 clinical psychologists in the United Kingdom rated the importance of components of formulation via an online questionnaire. Criteria for consensus were applied and statements were rerated in round two if consensus was not achieved. RESULTS: Consensus was achieved on 30 items, with 18 statements regarding components of a formulation and 12 statements regarding formulation process. Items that clinicians agreed upon emphasized the importance of integrating sociocultural, biological, strengths and personal meaning alongside well-established theoretical frameworks. Consensus was not reached on 20 items, including whether a formulation should be parsimonious or adhere to a model. CONCLUSION: Our findings provide mixed evidence regarding consensus on the key components of formulation. There was an agreement that formulation should be client-led and incorporate strengths and sociocultural factors. Further research should explore client perspectives on the key components of formulation and how these compare to the clinicians' perspectives.


Subject(s)
Consensus , Delphi Technique , Psychology, Clinical , Humans , Psychology, Clinical/education , Psychology, Clinical/standards , Adult , Female , United Kingdom , Male , Middle Aged , Clinical Competence/standards , Surveys and Questionnaires
10.
Acad Med ; 99(5): 513-517, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38113414

ABSTRACT

PROBLEM: Narrative assessments are commonly incorporated into competency-based medical education programs. However, efforts to share competency-based medical education assessment data among programs to support the evaluation and improvement of assessment systems have been limited in part because of security concerns. Deidentifying assessment data mitigates these concerns, but deidentifying narrative assessments is time-consuming, resource intensive, and error prone. The authors developed and tested a tool to automate the deidentification of narrative assessments and facilitate their review. APPROACH: The authors met throughout 2021 and 2022 to iteratively design, test, and refine the deidentification algorithm and data review interface. Preliminary testing of the prototype deidentification algorithm was performed using narrative assessments from the University of Saskatchewan emergency medicine program. The algorithm's accuracy was assessed by the authors using the review interface designed for this purpose. Formal testing included 2 rounds of deidentification and review by members of the authorship team. Both the algorithm and data review interface were refined during the testing process. OUTCOMES: Authors from 3 institutions, including 3 emergency medicine programs, an anesthesia program, and a surgical program, participated in formal testing. In the final round of review, 99.4% of the narrative assessments were fully deidentified (names, nicknames, and pronouns removed). The results were comparable for each institution and specialty. The data review interface was improved with feedback obtained after each round of review and found to be intuitive. NEXT STEPS: This innovation has demonstrated viability evidence of an algorithmic approach to the deidentification of assessment narratives while reinforcing that a small number of errors are likely to persist. 
Future steps include the refinement of both the algorithm to improve its accuracy and the data review interface to support additional data set formats.
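As a rough illustration of what rule-based deidentification of narrative comments involves, here is a minimal Python sketch that replaces listed names and gendered pronouns with placeholder tokens. The paper's actual algorithm is not described in the abstract; the function name, token lists, and placeholder format here are all hypothetical.

```python
import re

# Illustrative pronoun list; a real system would need a broader inventory.
GENDERED_PRONOUNS = {"he", "she", "him", "her", "his", "hers",
                     "himself", "herself"}

def deidentify(comment, names):
    """Replace listed names/nicknames (lowercased, in `names`) with [NAME]
    and gendered pronouns with [PRONOUN], preserving punctuation."""
    def scrub(match):
        word = match.group(0)
        if word.lower() in names:
            return "[NAME]"
        if word.lower() in GENDERED_PRONOUNS:
            return "[PRONOUN]"
        return word
    return re.sub(r"[A-Za-z]+", scrub, comment)
```

For example, `deidentify("Sam managed his airway well; she was calm.", {"sam"})` returns `"[NAME] managed [PRONOUN] airway well; [PRONOUN] was calm."` A sketch like this also hints at why a small error rate persists: names not on the list, possessives, and ambiguous words slip through, which matches the roughly 99.4% (not 100%) deidentification rate reported above.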


Subject(s)
Algorithms , Humans , Information Dissemination/methods , Education, Medical/methods , Narration , Competency-Based Education/methods , Emergency Medicine/education , Educational Measurement/methods , Clinical Competence/standards , Saskatchewan
11.
Arthritis Care Res (Hoboken) ; 76(5): 600-607, 2024 May.
Article in English | MEDLINE | ID: mdl-38108087

ABSTRACT

Starting in 2015, pediatric rheumatology fellowship training programs were required by the Accreditation Council for Graduate Medical Education to assess fellows' academic performance within 21 subcompetencies falling under six competency domains. Each subcompetency had four or five milestone levels describing developmental progression of knowledge and skill acquisition. Milestones were standardized across all pediatric subspecialties. As part of the Milestones 2.0 revision project, the Accreditation Council for Graduate Medical Education convened a workgroup in 2022 to write pediatric rheumatology-specific milestones. Using adult rheumatology's Milestones 2.0 as a starting point, the workgroup revised the patient care and medical knowledge subcompetencies and milestones to reflect requirements and nuances of pediatric rheumatology care. Milestones within the four remaining competency domains (professionalism, interpersonal and communication skills, practice-based learning and improvement, and systems-based practice) were standardized across all pediatric subspecialties and therefore not revised. The workgroup created a supplemental guide with explanations of the intent of each subcompetency, 25 in total, and examples for each milestone level. The new milestones are an important step forward for competency-based medical education in pediatric rheumatology. However, challenges remain. Milestone level assignment is meant to be informed by the results of multiple assessment methods. The lack of pediatric rheumatology-specific assessment tools typically results in clinical competency committees determining trainee milestone levels without such collated results as the foundation of their assessments.
Although further advances in pediatric rheumatology fellowship competency-based medical education are needed, Milestones 2.0 importantly establishes the first pediatric-specific rheumatology Milestones to assess fellow performance during training and help measure readiness for independent practice.


Subject(s)
Clinical Competence , Education, Medical, Graduate , Fellowships and Scholarships , Pediatrics , Rheumatology , Rheumatology/education , Rheumatology/standards , Humans , Clinical Competence/standards , Education, Medical, Graduate/standards , Pediatrics/education , Pediatrics/standards
14.
Rev. patol. respir ; 26(4)oct.-dic. 2023. tab
Article in Spanish | IBECS | ID: ibc-228619

ABSTRACT

Clinical inertia is defined as the physician's failure to initiate or intensify treatment when it is indicated. Our objective is to reflect on this concept as applied to chronic obstructive pulmonary disease and asthma, and on the roles of the health professional and the health system as stakeholders. We set patient inertia aside as a separate area of study and intervention. We propose defining clinical inertia for diagnostic and therapeutic processes in which a treatment is not started or modified (intensified or decreased). Factors that contribute to clinical and/or therapeutic inertia are also identified, and improvement strategies are proposed. (AU)


Subject(s)
Humans , Pulmonary Disease, Chronic Obstructive , Asthma , Clinical Competence/standards , Pulmonary Medicine , Professional Role
15.
JAMA ; 330(14): 1329-1330, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37738250

ABSTRACT

This Viewpoint examines the demands of maintenance of certification (MOC) requirements from the ABIM on balance with the projected benefits to quality of patient care.


Subject(s)
Clinical Competence , Specialty Boards , Certification/standards , Clinical Competence/standards , Education, Medical, Continuing/standards , Specialty Boards/standards , United States
16.
Med J (Ft Sam Houst Tex) ; (Per 23-4/5/6): 39-49, 2023.
Article in English | MEDLINE | ID: mdl-37042505

ABSTRACT

INTRODUCTION: Military first responders are in a unique category of the healthcare delivery system. They range in skill sets from combat medic and corpsman to nurses, physician assistants, and occasionally, doctors. Airway obstruction is the second leading cause of preventable battlefield death, and the decision for intervention to obtain an airway depends on the casualty's presentation, the provider's comfort level, and the available equipment, among many other variables. In the civilian prehospital setting, cricothyroidotomy (cric) success rates are over 90%, but in the US military combat environment, success rates range from 0% to 82%. This discrepancy in success rates may be due to training, environment, equipment, patient factors, or a combination of these. Many causes have been presumed to explain the variability, but no research has evaluated the first-person point of view. This study focused on interviewing military first responders with real-life combat placement of a surgical airway to identify the underlying influences that contribute to their perception of success or failure. MATERIALS AND METHODS: We conducted a qualitative study with in-depth semi-structured interviews to understand participants' real-life cric experiences. The interview questions were developed based on the Critical Incident Questionnaire. In total, there were 11 participants: 4 retired military and 7 active-duty service members. RESULTS: Nine themes were generated from the 11 interviews conducted. These themes can be categorized into 2 groups: factors internal to the provider, which we have called intrinsic influences, and factors external to the provider, which we call extrinsic influences. Intrinsic influences include personal well-being, confidence, experience, and decision-making. Extrinsic influences include training, equipment, assistance, environment, and patient factors.
CONCLUSIONS: This study revealed that practitioners in combat settings felt the need to train more frequently and in a stepwise fashion while following a well-understood airway management algorithm. More focus must be placed on utilizing live tissue with biological feedback, but only after anatomy and geospatial orientation are well understood on models, mannequins, and cadavers. The equipment utilized in training must be the equipment available in the field. Lastly, training should focus on scenarios that stress the physical and mental capabilities of providers, forcing a true test of both self-efficacy and deliberate practice, consistent with the intrinsic and extrinsic findings from the qualitative data. All of these steps must be overseen by expert practitioners. Another key is providing more time to focus on medical skills development, which is critical to overall confidence and to overcoming hesitation in the decision-making process. This applies especially to EMT-Basic level providers, who are the least medically trained and the most likely to encounter the casualty first. If possible, increasing the number of medical providers at the point of injury would achieve multiple goals under self-efficacy learning theory: assistance would instill confidence in the practitioner, help with the ability to prioritize patients quickly, decrease anxiety, and decrease hesitation to perform in the combat environment.


Subject(s)
Airway Management , Airway Obstruction , Clinical Competence , Emergency Responders , Military Personnel , Humans , Airway Management/methods , Airway Management/psychology , Airway Management/standards , Airway Obstruction/etiology , Airway Obstruction/surgery , Airway Obstruction/therapy , Emergency Medical Services/methods , Emergency Medical Services/standards , Military Personnel/education , Military Personnel/psychology , Emergency Responders/education , Emergency Responders/psychology , Clinical Competence/standards
17.
Article in English | MEDLINE | ID: mdl-37047941

ABSTRACT

No validated instrument is available for assessing the evidence-based practice (EBP) capacity of Vietnamese health professionals. This study aimed to translate the Health Sciences Evidence-Based Practice questionnaire (HS-EBP) from English to Vietnamese and ascertain its psychometric properties. Data were collected from two obstetric hospitals in Vietnam; a total of 343 midwives were randomly selected. The HS-EBP questionnaire was translated into Vietnamese (HS-EBP-V) by a group of bilingual experts, and content validity was assessed by two experts. Internal consistency and test-retest reliability were assessed using Cronbach's α and the intraclass correlation coefficient (ICC), respectively. Construct validity was assessed using the contrasted-groups approach. The content validity index of the HS-EBP-V reached 1.0. For the individual subscales, Cronbach's α was 0.92-0.97 and the ICC was between 0.45 and 0.66. The contrasted-groups approach showed discrimination, with a significant difference in subscale scores between diploma holders and bachelor's degree holders (p < 0.001). The HS-EBP-V thus showed satisfactory psychometric properties and was deemed a reliable, valid instrument for assessing competencies in, and facilitators of and barriers to, the five steps of EBP among Vietnamese midwives and other healthcare professionals.
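For readers unfamiliar with the internal-consistency statistic reported here, Cronbach's α for a multi-item subscale can be computed as below. This is a generic textbook sketch (using population variances), not code from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of item-score lists; each inner list holds one
    item's scores across the same set of respondents."""
    k = len(items)

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    total_item_var = sum(pvar(item) for item in items)
    respondent_totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - total_item_var / pvar(respondent_totals))
```

Two perfectly parallel items (every respondent scoring identically on both) give α = 1.0; values of 0.92-0.97, as reported for the HS-EBP-V subscales, indicate very high internal consistency.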


Subject(s)
Evidence-Based Practice , Hospitals, Maternity , Midwifery , Surveys and Questionnaires , Translating , Humans , Evidence-Based Practice/standards , Psychometrics/methods , Reproducibility of Results , Surveys and Questionnaires/standards , Vietnam , Midwifery/standards , Hospitals, Maternity/standards , Clinical Competence/standards
18.
Public Health Res Pract ; 33(1)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36918391

ABSTRACT

In the modern era, evidence-based medicine (EBM) has been embraced as the best approach to practising medicine, providing clinicians with 'objective' evidence from clinical research. However, for presentations with complex pathophysiology or from complex social environments, sometimes there remains no evidence, and no amount of research will obtain it. Yet, health researchers continue to undertake randomised controlled trials (RCTs) in complex environments, ignoring the risk that participants' health may be compromised throughout the trial process. This paper examines the role of research that seeks to obtain evidence to support EBM. We provide examples of RCTs on ear disease in Aboriginal populations as a case in point. Decades of ear research have failed to yield statistically significant findings, demonstrating that when multiple factors are at play, study designs struggle to balance the known disease process drivers, let alone unknown drivers. This paper asks the reader to consider whether the pursuit of research is likely to produce evidence in complex situations, or whether RCTs should not be undertaken in these situations at all. Instead, clinicians could apply empirical evidence, tailoring treatments to individuals while taking into account the complexities of their life circumstances.


Subject(s)
Clinical Competence , Delivery of Health Care , Empirical Research , Evidence-Based Medicine , Patient Care , Randomized Controlled Trials as Topic , Humans , Australian Aboriginal and Torres Strait Islander Peoples , Clinical Competence/standards , Delivery of Health Care/standards , Ear Diseases , Evidence-Based Medicine/standards , Patient Care/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards
19.
J Ment Health ; 32(4): 779-786, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35766312

ABSTRACT

BACKGROUND: Despite demonstrated positive outcomes in education, academic positions for Experts by Experience in mental health have not been widely implemented. To date, positions have been driven by individual champions (allies). Their motivation for this support has not yet been researched. AIMS: To deepen understanding of the motivations of mental health academics who have championed and supported the implementation of Expert by Experience positions. METHODS: A qualitative exploratory study was undertaken involving in-depth individual interviews with 16 academics with experience of actively supporting the implementation of Expert by Experience positions in academia. Data were analysed independently by two researchers using a structured thematic framework. RESULTS: Motivations commonly arose from allies' own experiences of working with or exposure to Experts by Experience. Other motivating factors included belief in the value of the specific knowledge and expertise Experts by Experience contributed to mental health education, and recognition of the essential role Experts by Experience play in meeting policy expectations and the broader philosophy of the university. CONCLUSIONS: The motivations identified by allies in this study have implications for Expert by Experience roles. Deeper understanding of motivations to support these roles is essential to arguing for their value, and ultimately to producing positive outcomes in the education of health professionals.


Subject(s)
Clinical Competence , Health Personnel , Mental Health , Motivation , Qualitative Research , Health Personnel/education , Health Personnel/psychology , Mental Health/education , Humans , Interviews as Topic , Research Personnel , Clinical Competence/standards , Universities , Students , Stakeholder Participation , Australia , New Zealand , Ireland , Male , Female , Social Workers , Psychiatric Nursing , Psychiatry , Data Analysis